Abstract Federal and local agencies have identified a need to create building databases to help ensure that critical infrastructure and residential buildings are accounted for in disaster preparedness and to aid decision-making in subsequent recovery efforts. To respond effectively, we need to understand the built environment: where people live and work, and the critical infrastructure they rely on. Yet a major discrepancy exists in the way data about buildings are collected across the United States. There is no harmonization in what data are recorded by city, county, or state governments, let alone at the national scale. We demonstrate how existing open-source datasets can be spatially integrated and subsequently used as training data for machine learning (ML) models that predict building occupancy type, a major component of disaster preparedness and decision-making. Multiple ML algorithms are compared. We address strategies to handle significant class imbalance and introduce Bayesian neural networks to quantify prediction uncertainty. The 100-year flood in North Carolina is provided as a practical application in disaster preparedness.
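The abstract names class imbalance and Bayesian neural networks but not an implementation. A minimal sketch of one common pairing, inverse-frequency class weights plus Monte Carlo dropout as an approximate Bayesian classifier, is shown below; the feature count, class count, and class frequencies are all illustrative, not the paper's setup.

```python
# Hypothetical sketch: class-weighted training plus Monte Carlo dropout as an
# approximate Bayesian neural network for building-occupancy classification.
# Dimensions, counts, and data are placeholders, not the paper's setup.
import torch
import torch.nn as nn

N_FEATURES, N_CLASSES = 12, 5  # assumed dimensions

class OccupancyNet(nn.Module):
    def __init__(self, p_drop=0.2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_FEATURES, 64), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(64, 64), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(64, N_CLASSES),
        )

    def forward(self, x):
        return self.net(x)

# Inverse-frequency class weights counteract the class imbalance.
counts = torch.tensor([5000., 800., 300., 120., 40.])  # illustrative counts
weights = counts.sum() / (len(counts) * counts)
loss_fn = nn.CrossEntropyLoss(weight=weights)

model = OccupancyNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(256, N_FEATURES)             # stand-in training batch
y = torch.randint(0, N_CLASSES, (256,))
for _ in range(100):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()

# Keeping dropout active at inference time yields an ensemble of predictions
# whose spread serves as an approximate Bayesian uncertainty estimate.
model.train()  # leave dropout on
with torch.no_grad():
    probs = torch.stack([model(x).softmax(dim=-1) for _ in range(50)])
mean_prob, std_prob = probs.mean(0), probs.std(0)
```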
-
Abstract Effective flood prediction supports the development of proactive risk management strategies, but its application in ungauged basins faces tremendous challenges due to limited or no streamflow records. This study investigates the potential of integrating streamflow derived from synthetic aperture radar (SAR) data with U.S. National Water Model (NWM) reanalysis estimates to develop improved predictions of above-normal flow (ANF) over the coterminous United States. Leveraging SAR data from the Global Flood Detection System to estimate antecedent conditions using principal component regression, we apply the spatial-temporal hierarchical model (STHM) to NWM outputs to improve ANF prediction. Our evaluation shows promising results, with the integrated model, STHM-SAR, significantly improving on the NWM, especially at 60% of the sites in the coastal region. Spatial and temporal validations underscore the model's robustness, with SAR data contributing 24% to the explained variance on average. This approach not only improves NWM prediction but also uniquely combines existing remote sensing data with national-scale predictions, showcasing its potential to improve hydrological modeling, particularly in regions with limited stream gauges.
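The abstract's use of principal component regression (PCR) to estimate antecedent conditions can be illustrated in a few lines: compress many correlated SAR-derived signals into a handful of components, then regress on them. The data, feature count, and component count below are invented stand-ins, not the study's configuration.

```python
# Hypothetical sketch of principal component regression (PCR): compress
# correlated SAR-derived flood signals into a few components, then regress
# an antecedent-condition index on them. All data here are synthetic.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))  # stand-in: 20 correlated SAR signal features
y = X[:, :3].sum(axis=1) + rng.normal(scale=0.1, size=500)  # stand-in target

pcr = make_pipeline(
    StandardScaler(),            # PCA is scale-sensitive, so standardize first
    PCA(n_components=5),         # keep only the leading components
    LinearRegression(),
)
pcr.fit(X, y)
print("R^2 on training data:", pcr.score(X, y))
```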
-
Abstract Floods cause hundreds of fatalities and billions of dollars of economic loss each year in the United States. To mitigate these damages, accurate flood prediction is needed for issuing early warnings to the public. This situation is exacerbated for flood prediction over large model domains, particularly in ungauged basins. To improve flood prediction for both gauged and ungauged basins, we propose a spatio-temporal hierarchical model (STHM) using above-normal flow estimation with a 10-day window of modeled National Water Model (NWM) streamflow and a variety of catchment characteristics as input. The STHM is calibrated (1993–2008) and validated (2009–2018) across three broad basin groups (controlled, natural, and coastal) and shows significant improvement for the first two. A seasonal analysis shows that the most influential predictors beyond the NWM streamflow reanalysis are the previous 3-day average streamflow and the aridity index for controlled and natural basins, respectively. To evaluate the STHM's ability to improve above-normal streamflow prediction in ungauged basins, 20-fold cross-validation is performed, leaving out 5% of sites in each fold. Results show that the STHM increases predictive skill at over 50% of sites by 0.1 Nash-Sutcliffe efficiency (NSE) and improves streamflow prediction at over 65% of sites to an NSE above 0.67, which demonstrates that the STHM is one of the first of its kind and could be employed for flood prediction in both gauged and ungauged basins.
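Since the NSE is the yardstick for every skill claim in this abstract, a short reference implementation may help; the streamflow arrays are illustrative only. An NSE of 1 means perfect agreement, 0 means the model is no better than predicting the observed mean, and negative values are worse than the mean.

```python
# Sketch of the Nash-Sutcliffe efficiency (NSE) used to score each site.
import numpy as np

def nse(obs: np.ndarray, sim: np.ndarray) -> float:
    """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

obs = np.array([10.0, 14.0, 30.0, 22.0, 12.0])  # illustrative observed flow
sim = np.array([11.0, 13.5, 27.0, 24.0, 12.5])  # illustrative modeled flow
print(f"NSE = {nse(obs, sim):.3f}")
```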
-
Abstract With an increasing number of continental-scale hydrologic models, the ability to evaluate performance is key to understanding uncertainty and making improvements to the model(s). We hypothesize that any model running a single set of physics cannot be "properly" calibrated for the range of hydroclimatic diversity seen in the continental United States. Here, we evaluate the NOAA National Water Model (NWM) version 2.0 historical streamflow record in over 4,200 natural and controlled basins using the Nash-Sutcliffe efficiency metric decomposed into relative performance, conditional bias, and unconditional bias. Each of these is evaluated in the context of meteorological, landscape, and anthropogenic characteristics to better understand where the model does poorly, what potentially causes the poor performance, and what similarities systematically poor-performing areas share. The primary objective is to pinpoint the traits of places with good or bad performance and low or high bias. NWM relative performance is higher where there is high precipitation, snow coverage (depth and fraction), and barren area. Low relative skill is associated with high potential evapotranspiration, aridity, moisture-and-energy phase correlation, and forest, shrubland, grassland, and impervious area. We see less bias in locations with high precipitation, moisture-and-energy phase correlation, and barren and grassland areas, and more bias in areas with high aridity, snow coverage/fraction, and urbanization. The insights gained can help identify key hydrological factors underpinning NWM predictive skill; reinforce the need for regionalized parameterization and modeling; and help inform heterogeneous modeling systems, like the NOAA Next Generation Water Resource Modeling Framework, to enhance ongoing development and evaluation.
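The three terms named here (relative performance, conditional bias, unconditional bias) match the classic Murphy (1988) skill-score decomposition, NSE = r^2 - (r - alpha)^2 - beta^2 with alpha = sigma_sim / sigma_obs and beta = (mu_sim - mu_obs) / sigma_obs. Assuming that form (the paper may differ in detail), a compact sketch:

```python
# Hedged sketch of the Murphy-style NSE decomposition into relative
# performance (r^2), conditional bias ((r - alpha)^2), and unconditional
# bias (beta^2); the three terms recombine exactly to the overall NSE
# when population statistics (ddof=0) are used. Data are illustrative.
import numpy as np

def nse_decomposition(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    r = np.corrcoef(obs, sim)[0, 1]
    alpha = sim.std() / obs.std()                  # variability ratio
    beta = (sim.mean() - obs.mean()) / obs.std()   # normalized mean error
    rel_perf = r ** 2               # relative performance (potential skill)
    cond_bias = (r - alpha) ** 2    # conditional bias
    uncond_bias = beta ** 2         # unconditional bias
    return rel_perf, cond_bias, uncond_bias, rel_perf - cond_bias - uncond_bias

obs = np.array([5.0, 9.0, 14.0, 8.0, 6.0])
sim = np.array([6.0, 10.0, 12.0, 9.0, 7.0])
print(nse_decomposition(obs, sim))  # last value equals the overall NSE
```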
-
Abstract A digital map of the built environment is useful for a range of economic, emergency response, and urban planning exercises, such as helping find places in app-driven interfaces, helping emergency managers know what locations might be impacted by a flood or fire, and helping city planners proactively identify vulnerabilities and plan for how a city is growing. Since its inception in 2004, OpenStreetMap (OSM) has set the benchmark for open geospatial data and has become a key player in the public, research, and corporate realms. Following the foundations laid by OSM, several open geospatial products describing the built environment have blossomed, including the Microsoft USA building footprint layer and the OpenAddresses project. Each of these products uses different data collection methods, ranging from public contributions to artificial intelligence, and taken together they could provide a comprehensive description of the built environment. Yet these projects are still siloed, and their variety makes integration and interoperability a major challenge. Here, we document an approach for merging data from these three major open building datasets and outline a workflow that is scalable to the continental United States (CONUS). We show how the results can be structured as a knowledge graph over which machine learning models are built. These models can help propagate and complete unknown quantities that can then be leveraged in disaster management.
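One core integration step, spatially joining address points and OSM attributes onto building footprints, can be sketched with geopandas. The file names and column names below are invented for illustration, and the real CONUS-scale workflow would be tiled and parallelized rather than run in one pass.

```python
# Hypothetical sketch: attach OpenAddresses points to Microsoft building
# footprints with a spatial join, then tag the result with overlapping OSM
# attributes. File and column names are assumptions, not the paper's schema.
import geopandas as gpd

footprints = gpd.read_file("ms_footprints.geojson")  # polygons
addresses = gpd.read_file("openaddresses.geojson")   # points
osm = gpd.read_file("osm_buildings.geojson")         # polygons with tags

# Keep every footprint; attach any address point that falls inside it.
merged = gpd.sjoin(footprints, addresses, how="left", predicate="contains")

# Attach OSM attributes (e.g., building type) where footprints intersect OSM.
merged = merged.drop(columns="index_right").sjoin(
    osm[["building", "geometry"]], how="left", predicate="intersects"
)
print(merged["building"].notna().mean())  # share of footprints with an OSM tag
```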
-
Abstract Estimating uncertainty in flood model predictions is important for many applications, including risk assessment and flood forecasting. We focus on uncertainty in physics-based urban flooding models. We consider the effects of the model's complexity and of uncertainty in key input parameters. The effect of rainfall intensity on the uncertainty in water depth predictions is also studied. As a test study, we choose the Interconnected Channel and Pond Routing (ICPR) model of a part of the city of Minneapolis. The uncertainty in the ICPR model's predictions of floodwater depth is quantified in terms of the ensemble variance using the multilevel Monte Carlo (MC) simulation method. Our results show that uncertainties in the studied domain are highly localized. Model simplifications, such as disregarding groundwater flow, lead to overly confident predictions, that is, predictions that are both less accurate and less uncertain than those of the more complex model. We find that for the same number of uncertain parameters, increasing the model resolution reduces uncertainty in the model predictions (and increases the MC method's computational cost). We employ the multilevel MC method to reduce the cost of estimating uncertainty in a high-resolution ICPR model. Finally, we use the ensemble estimates of the mean and covariance of the flood depth for real-time flood depth forecasting using the physics-informed Gaussian process regression method. We show that even with few measurements, the proposed framework results in a more accurate forecast than that provided by the mean prediction of the ICPR model.
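The multilevel MC idea used here can be shown with a toy example: estimate a fine-resolution expectation as a telescoping sum of a cheap coarse-level mean plus a few expensive fine-level corrections. In the sketch below, `model(level, z)` is an invented stand-in for an ICPR run at a given grid resolution, and the sample counts are illustrative; coupling the same random inputs across levels is what keeps the correction variances small.

```python
# Hedged sketch of multilevel Monte Carlo (MLMC): many cheap coarse samples,
# few expensive fine corrections, combined via a telescoping sum.
import numpy as np

rng = np.random.default_rng(1)

def model(level: int, z: np.ndarray) -> float:
    """Toy stand-in: discretization bias shrinks as the level increases."""
    bias = 0.5 ** level
    return np.mean(z) + bias * np.sin(z).mean()

levels = [0, 1, 2]
n_samples = [4000, 400, 40]       # fewer samples on expensive fine levels
estimate = 0.0
for lvl, n in zip(levels, n_samples):
    corrections = []
    for _ in range(n):
        z = rng.normal(size=64)   # same random input drives both levels
        fine = model(lvl, z)
        coarse = model(lvl - 1, z) if lvl > 0 else 0.0
        corrections.append(fine - coarse)  # telescoping-sum term
    estimate += np.mean(corrections)
print("MLMC estimate of the mean flood depth (toy):", estimate)
```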